Academic Integrity


Human-AI Collaboration or Academic Misconduct? Measuring AI Use in Student Writing Through Stylometric Evidence

Oliveira, Eduardo Araujo, Mohoni, Madhavi, López-Pernas, Sonsoles, Saqr, Mohammed

arXiv.org Artificial Intelligence

Human-Artificial Intelligence (HAI) collaboration in writing offers opportunities to enhance efficiency and boost student confidence; however, it also carries risks, such as reduced creativity, over-reliance on AI-generated content, and threats to academic integrity (Kim & Lee, 2023). While the ethical use of AI in education is widely acknowledged as a way to enhance student learning (Cotton et al., 2023; Foltynek et al., 2023), the rise of Unauthorised Content Generation (UCG) presents a significant challenge to academic integrity. Measuring the extent and nature of HAI collaboration in academic contexts remains a critical challenge for educators, particularly as generative AI (genAI) tools become increasingly available and integrated into educational settings (Atchley et al., 2024; E. Oliveira et al., 2023). Distinguishing AI-generated text from human-authored content is necessary for understanding student learning behaviours, supporting skill development, and maintaining academic integrity. Analysing student writing patterns can help educators evaluate how students engage with AI tools, track their writing skill progression, and identify areas where additional support is needed (Pan et al., 2025). Existing detection tools for AI-assisted misconduct often lack reliability, explainability, and resilience to circumvention strategies such as paraphrasing (Cotton et al., 2023). These challenges highlight the need for innovative, transparent, and robust approaches to address the unacknowledged use of genAI in HAI collaboration within academic writing (Kasneci et al., 2023).
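The stylometric approach this abstract describes can be illustrated with a minimal sketch. The feature set and distance metric below are hypothetical simplifications for illustration, not the authors' actual method; real stylometric systems use far richer feature sets and calibrated baselines.

```python
import re

# Illustrative function-word list; production stylometry uses hundreds of features.
FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "is", "it", "for"}

def stylometric_features(text):
    """Compute a small, illustrative set of writing-style features."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n = len(words) or 1
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / n,  # vocabulary richness
        "function_word_rate": sum(w in FUNCTION_WORDS for w in words) / n,
    }

def style_distance(text_a, text_b):
    """Euclidean distance between the two texts' feature vectors."""
    fa, fb = stylometric_features(text_a), stylometric_features(text_b)
    return sum((fa[k] - fb[k]) ** 2 for k in fa) ** 0.5
```

In practice one would compare a new submission against a baseline corpus of the same student's earlier writing; a large distance flags a style shift for human review rather than serving as an accusation of misconduct.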


Navigating the New Landscape: A Conceptual Model for Project-Based Assessment (PBA) in the Age of GenAI

Kadel, Rajan, Shailendra, Samar, Saxena, Urvashi Rahul

arXiv.org Artificial Intelligence

The rapid integration of Generative Artificial Intelligence (GenAI) into higher education presents both opportunities and challenges for assessment design, particularly within Project-Based Assessment (PBA) contexts. Traditional assessment methods often emphasise the final product in the PBA, which can now be significantly influenced or created by GenAI tools, raising concerns regarding product authenticity, academic integrity, and learning validation. This paper advocates for a reimagined assessment model for Project-Based Learning (PBL) or a capstone project that prioritises process-oriented evaluation, multi-modal and multifaceted assessment design, and ethical engagement with GenAI to enable higher-order thinking. The model also emphasises the use of (GenAI-assisted) personalised feedback by a supervisor as a means of observing the learning process throughout the project lifecycle. A use case scenario is provided to illustrate the application of the model in a capstone project setting. The paper concludes with recommendations for educators and curriculum designers to ensure that assessment practices remain robust, learner-centric, and integrity-driven in the evolving landscape of GenAI.


Adapting University Policies for Generative AI: Opportunities, Challenges, and Policy Solutions in Higher Education

Beale, Russell

arXiv.org Artificial Intelligence

The rapid proliferation of generative artificial intelligence (AI) tools - especially large language models (LLMs) such as ChatGPT - has ushered in a transformative era in higher education. Universities in developed regions are increasingly integrating these technologies into research, teaching, and assessment. On one hand, LLMs can enhance productivity by streamlining literature reviews, facilitating idea generation, assisting with coding and data analysis, and even supporting grant proposal drafting. On the other hand, their use raises significant concerns regarding academic integrity, ethical boundaries, and equitable access. Recent empirical studies indicate that nearly 47% of students use LLMs in their coursework - with 39% using them for exam questions and 7% for entire assignments - while detection tools currently achieve around 88% accuracy, leaving a 12% error margin. This article critically examines the opportunities offered by generative AI, explores the multifaceted challenges it poses, and outlines robust policy solutions. Emphasis is placed on redesigning assessments to be AI-resilient, enhancing staff and student training, implementing multi-layered enforcement mechanisms, and defining acceptable use. By synthesizing data from recent research and case studies, the article argues that proactive policy adaptation is imperative to harness AI's potential while safeguarding the core values of academic integrity and equity.


Student Perspectives on the Benefits and Risks of AI in Education

Pitts, Griffin, Marcus, Viktoria, Motamedi, Sanaz

arXiv.org Artificial Intelligence

The use of chatbots equipped with artificial intelligence (AI) in educational settings has increased in recent years, showing potential to support teaching and learning. However, the adoption of these technologies has raised concerns about their impact on academic integrity, students' ability to problem-solve independently, and potential underlying biases. To better understand students' perspectives and experiences with these tools, a survey was conducted at a large public university in the United States. Through thematic analysis, 262 undergraduate students' responses regarding their perceived benefits and risks of AI chatbots in education were identified and categorized into themes. The results highlight several benefits identified by the students, with feedback and study support, instruction capabilities, and access to information being the most cited. Their primary concerns included risks to academic integrity, accuracy of information, loss of critical thinking skills, the potential development of overreliance, and ethical considerations such as data privacy, system bias, environmental impact, and preservation of human elements in education. While student perceptions align with previously discussed benefits and risks of AI in education, they show heightened concerns about distinguishing between human- and AI-generated work, particularly in cases where authentic work is flagged as AI-generated. To address students' concerns, institutions can establish clear policies regarding AI use and develop curriculum around AI literacy. With these in place, practitioners can effectively develop and implement educational systems that leverage AI's potential in areas such as immediate feedback and personalized learning support. This approach can enhance the quality of students' educational experiences while preserving the integrity of the learning process with AI.


Generative Knowledge Production Pipeline Driven by Academic Influencers

Feher, Katalin, Demeter, Marton

arXiv.org Artificial Intelligence

Generative AI transforms knowledge production, validation, and dissemination, raising academic integrity and credibility concerns. This study examines 53 academic influencer videos that reached 5.3 million viewers to identify an emerging, structured, implementation-ready pipeline balancing originality, ethical compliance, and human-AI collaboration despite the disruptive impacts. Findings highlight generative AI's potential to automate publication workflows and democratize participation in knowledge production while challenging traditional scientific norms. Academic influencers emerge as key intermediaries in this paradigm shift, connecting bottom-up practices with institutional policies to improve adaptability. Accordingly, the study proposes a generative publication production pipeline and a policy framework for co-intelligence adaptation and reinforcing credibility-centered standards in AI-powered research. These insights support scholars, educators, and policymakers in understanding AI's transformative impact by advocating responsible and innovation-driven knowledge production. Additionally, they reveal pathways for automating best practices, optimizing scholarly workflows, and fostering creativity in academic research and publication. Keywords: generative AI, ChatGPT, academic integrity, influencers, knowledge production, social media, policy implications, academic policy

1. INTRODUCTION
The advent of generative AI (GenAI) transforms knowledge production, increasingly supporting and partially automating the academic workflow (Bolanos et al. 2024). This trend suggests a paradigm shift in which researchers effectively and productively utilize generative AI tools, potentially leading to more automated scientific workflows. However, we have also identified a human component in this process: the impact of academic influencers promoting hands-on knowledge about GenAI in academic projects via social media.


Beyond Detection: Designing AI-Resilient Assessments with Automated Feedback Tool to Foster Critical Thinking

Akbar, Muhammad Sajjad

arXiv.org Artificial Intelligence

Muhammad Sajjad Akbar, University of Sydney, Australia. Compiled April 1, 2025.

The growing prevalence of generative AI tools such as ChatGPT has raised urgent concerns about their impact on student learning, particularly their potential to erode critical thinking and creativity in academic contexts. As students increasingly use these tools to complete assessments, foundational cognitive skills are at risk of being bypassed, challenging the integrity of higher education and the authenticity of student work. Current AI-generated text detection tools are fundamentally inadequate in addressing this challenge. They produce unreliable, unverifiable outputs and are highly susceptible to false positives and false negatives, especially when students apply obfuscation techniques such as paraphrasing, translation, or structural rewording. These tools rely on shallow statistical features rather than contextual or semantic understanding, making them unsuitable as definitive indicators of AI misuse. In response, this research proposes an AI-resilient, assessment-based solution that shifts focus from reactive detection to proactive assessment design. The solution is delivered through a web-based Python tool that integrates Bloom's Taxonomy with advanced natural language processing techniques, including GPT-3.5 Turbo, BERT-based semantic similarity, and TF-IDF metrics, to evaluate the AI-solvability of assignment tasks. By analyzing both surface-level and semantic features, the tool helps educators assess whether a task targets lower-order thinking (e.g., recall, summarization), which is more easily completed by AI, or higher-order skills (e.g., analysis, evaluation, creation), which are more resistant to AI automation.
This framework empowers educators to intentionally design cognitively demanding, AI-resistant assessments that promote originality, critical thinking, and fairness. By addressing the root issue of assessment design rather than relying on flawed detection tools, this research contributes a sustainable and pedagogically sound strategy to uphold academic standards and foster authentic learning in the era of AI. KEYWORDS: Generative AI; ChatGPT; AI-resilient; Bloom's Taxonomy; Automated Assessments; AI-solvability; Automated Feedback

1. Introduction
Integrating AI technology with innovative thinking skills in the higher education (HE) environment has grown more challenging due to rapid digital innovation and ubiquitous data availability. In applied education, innovative thinking is essential: it entails thinking creatively to come up with original solutions to issues, enhance workflows, or open up new possibilities.
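The Bloom's-Taxonomy-based scoring idea can be sketched in a few lines. The verb lists and weights below are illustrative assumptions, not the paper's actual tool, which additionally uses GPT-3.5 Turbo, BERT-based semantic similarity, and TF-IDF rather than simple keyword matching.

```python
import re

# Lower Bloom's levels (recall/summarise) score as more AI-solvable;
# higher levels (analyse/evaluate/create) as more AI-resistant.
# These verb sets and weights are hypothetical, for illustration only.
BLOOM_VERBS = {
    "remember":   ({"define", "list", "recall", "identify", "name"}, 0.9),
    "understand": ({"summarize", "summarise", "explain", "describe"}, 0.8),
    "apply":      ({"use", "apply", "demonstrate", "solve"}, 0.6),
    "analyze":    ({"analyze", "analyse", "compare", "contrast"}, 0.4),
    "evaluate":   ({"evaluate", "justify", "critique", "argue"}, 0.3),
    "create":     ({"design", "create", "propose", "invent"}, 0.2),
}

def ai_solvability(task: str) -> float:
    """Return a rough 0..1 score: higher means more easily completed by AI."""
    words = set(re.findall(r"[a-z]+", task.lower()))
    scores = [score for verbs, score in BLOOM_VERBS.values() if words & verbs]
    # Tasks mixing several Bloom's levels get the average; tasks with no
    # recognised verb default to an uninformative 0.5.
    return sum(scores) / len(scores) if scores else 0.5
```

For example, "List the key terms" matches only the remember level and scores 0.9 (highly AI-solvable), while "Design and justify a novel solution" matches the create and evaluate levels and averages to 0.25, suggesting a more AI-resistant task.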


UK universities warned to 'stress-test' assessments as 92% of students use AI

The Guardian

British universities have been warned to "stress-test" all assessments after new research revealed "almost all" undergraduates are using generative artificial intelligence (genAI) in their studies. A survey of 1,000 students – both domestic and international – found there had been an "explosive increase" in the use of genAI in the past 12 months. Almost nine out of 10 (88%) in the 2025 poll said they used tools such as ChatGPT for their assessments, up from 53% last year. The proportion using any AI tool surged from 66% in 2024 to 92% in 2025, meaning just 8% of students are not using AI, according to a report published by the Higher Education Policy Institute and Kortext, a digital etextbook provider. Josh Freeman, the report's author, said such dramatic changes in behaviour in just 12 months were almost unheard of, and warned: "Universities should take heed: generative AI is here to stay." "There are urgent lessons here for institutions," Freeman said. "Every assessment must be reviewed in case it can be completed easily using AI."


From Prohibition to Adoption: How Hong Kong Universities Are Navigating ChatGPT in Academic Workflows

Huang, Junjun, Wu, Jifan, Wang, Qing, Yuan, Kemeng, Li, Jiefeng, Lu, Di

arXiv.org Artificial Intelligence

This paper compares the period when Hong Kong universities banned ChatGPT with the present, in which it has become integrated into academic processes. Prompted by concerns over integrity and the ethics of these technologies, institutions have adapted by shifting towards policies of AI literacy and responsible use. This study examines the new paradigms that have been developed to realise these benefits while preventing negative effects on academia. Keywords: ChatGPT, Academic Integrity, AI Literacy, Ethical AI Use, Generative AI in Education, University Policy, AI Integration in Academia, Higher Education and Technology


On Perception of Prevalence of Cheating and Usage of Generative AI

Denkin, Roman

arXiv.org Artificial Intelligence

This report investigates the perceptions of teaching staff on the prevalence of student cheating and the impact of Generative AI on academic integrity. Data was collected via an anonymous survey of teachers at the Department of Information Technology at Uppsala University and analyzed alongside institutional statistics on cheating investigations from 2004 to 2023. The results indicate that while teachers generally do not view cheating as highly prevalent, there is a strong belief that its incidence is increasing, potentially due to the accessibility of Generative AI. Most teachers do not equate AI usage with cheating but acknowledge its widespread use among students. Furthermore, teachers' perceptions align with objective data on cheating trends, highlighting their awareness of the evolving landscape of academic dishonesty.


Generative AI in Higher Education: A Global Perspective of Institutional Adoption Policies and Guidelines

Jin, Yueqiao, Yan, Lixiang, Echeverria, Vanessa, Gašević, Dragan, Martinez-Maldonado, Roberto

arXiv.org Artificial Intelligence

Integrating generative AI (GAI) into higher education is crucial for preparing a future generation of GAI-literate students. Yet a thorough understanding of the global institutional adoption policy remains absent, with most of the prior studies focused on the Global North and the promises and challenges of GAI, lacking a theoretical lens. This study utilizes the Diffusion of Innovations Theory to examine GAI adoption strategies in higher education across 40 universities from six global regions. It explores the characteristics of GAI innovation, including compatibility, trialability, and observability, and analyses the communication channels and roles and responsibilities outlined in university policies and guidelines. The findings reveal a proactive approach by universities towards GAI integration, emphasizing academic integrity, teaching and learning enhancement, and equity. Despite a cautious yet optimistic stance, a comprehensive policy framework is needed to evaluate the impacts of GAI integration and establish effective communication strategies that foster broader stakeholder engagement. The study highlights the importance of clear roles and responsibilities among faculty, students, and administrators for successful GAI integration, supporting a collaborative model for navigating the complexities of GAI in education. This study contributes insights for policymakers in crafting detailed strategies for its integration.